
    Exploiting the limitations of spatio-temporal vision for more efficient VR rendering

    Ever-higher virtual reality (VR) display resolutions and good-quality anti-aliasing make rendering in VR prohibitively expensive. Generating these complex frames 90 times per second in a binocular setup demands substantial computational power. Wireless transmission of the frames from the GPU to the VR headset poses a further challenge, requiring dedicated high-bandwidth links.

    Noise-Aware Merging of High Dynamic Range Image Stacks Without Camera Calibration

    A near-optimal reconstruction of the radiance of a High Dynamic Range scene from an exposure stack can be obtained by modeling the camera noise distribution. The latent radiance is then estimated using Maximum Likelihood Estimation. However, this requires a well-calibrated noise model of the camera, which is difficult to obtain in practice. We show that an unbiased estimate of comparable variance can be obtained with a simpler Poisson noise estimator, which does not require knowledge of camera-specific noise parameters. We demonstrate this empirically for four different cameras, ranging from a smartphone camera to a full-frame mirrorless camera. Our experimental results are consistent for simulated as well as real images, and across different camera settings.
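    The appeal of the Poisson estimator is that its closed form needs only the exposure times. As an illustrative sketch (not the paper's code), assuming linear sensor values with saturated pixels already masked as NaN: under y_i ~ Poisson(phi * t_i), the maximum-likelihood radiance is simply sum(y_i) / sum(t_i).

```python
import numpy as np

def merge_poisson(stack, exposure_times):
    """Merge an exposure stack into a radiance estimate, assuming
    pure photon (Poisson) noise.

    stack: (N, H, W) array of linear sensor values (counts), with
           saturated pixels masked as NaN beforehand.
    exposure_times: (N,) exposure times.

    Under y_i ~ Poisson(phi * t_i), the ML estimate of the radiance
    phi is sum over valid exposures of y_i, divided by sum of t_i.
    """
    t = np.asarray(exposure_times, dtype=float).reshape(-1, 1, 1)
    valid = ~np.isnan(stack)                # per-pixel validity mask
    y = np.where(valid, stack, 0.0)         # zero out masked samples
    return y.sum(axis=0) / (valid * t).sum(axis=0)
```

    Note that no read-noise variance or camera gain appears in the formula, which is exactly why no per-camera calibration is needed.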

    Trained Perceptual Transform for Quality Assessment of High Dynamic Range Images and Video

    In this paper, we propose a trained perceptual transform for quality assessment of high dynamic range (HDR) images and video. The transform converts the absolute luminance values found in HDR images into perceptually uniform units, which can then be used with any standard-dynamic-range metric. The new transform is derived by fitting the parameters of a previously proposed perceptual encoding function to four different HDR subjective quality assessment datasets using Bayesian optimization. Combined with a simple peak signal-to-noise ratio measure, the new transform achieves better prediction performance in cross-dataset validation than existing transforms. We provide Matlab code for our metric.
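    The overall pipeline is: encode absolute luminance with a perceptual curve, then run an ordinary metric on the encoded values. The sketch below illustrates this with the standard SMPTE ST 2084 (PQ) curve as a stand-in for the paper's fitted encoding function, and PSNR as the metric; the fitted parameters of the actual transform differ.

```python
import numpy as np

# SMPTE ST 2084 (PQ) constants -- a stand-in for the paper's
# fitted perceptual encoding function.
M1, M2 = 0.1593017578125, 78.84375
C1, C2, C3 = 0.8359375, 18.8515625, 18.6875

def pq_encode(luminance):
    """Map absolute luminance (cd/m^2, up to 10,000) into
    approximately perceptually uniform units in [0, 1]."""
    y = np.clip(np.asarray(luminance, dtype=float) / 10000.0, 0.0, 1.0)
    ym = y ** M1
    return ((C1 + C2 * ym) / (1.0 + C3 * ym)) ** M2

def pq_psnr(ref, test):
    """PSNR computed in perceptually uniform units (peak = 1.0)."""
    mse = np.mean((pq_encode(ref) - pq_encode(test)) ** 2)
    return float('inf') if mse == 0 else -10.0 * np.log10(mse)
```

    Because the encoding is applied before the metric, any pixel-based SDR metric (SSIM, VDP-style detectors, etc.) can be substituted for PSNR in the same way.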

    Dataset and metrics for predicting local visible differences

    A large number of imaging and computer graphics applications require localized information on the visibility of image distortions. Existing image quality metrics are not suitable for this task, as they provide a single quality value per image. Existing visibility metrics produce visual difference maps and are specifically designed for detecting just-noticeable distortions, but their predictions are often inaccurate. In this work, we argue that the key reason for this problem is the lack of large image collections with good coverage of the distortions that occur in different applications. To address the problem, we collect an extensive dataset of reference and distorted image pairs together with user markings indicating whether distortions are visible or not. We propose a statistical model designed for the meaningful interpretation of such data, which is affected by visual search and the imprecision of manual marking. We use our dataset to train existing metrics and demonstrate that their performance significantly improves. We show that our dataset with the proposed statistical model can be used to train a new CNN-based metric, which outperforms the existing solutions. We demonstrate the utility of such a metric in visually lossless JPEG compression, super-resolution and watermarking.

    Real-time noise-aware tone mapping

    Real-time, high-quality video tone mapping is needed for many applications, such as digital viewfinders in cameras, display algorithms that adapt to ambient light, in-camera processing, rendering engines for video games and video post-processing. We propose a viable solution for these applications by designing a video tone-mapping operator that controls the visibility of noise, adapts to the display and viewing environment, minimizes contrast distortions, preserves or enhances image details, and can run in real time on an incoming sequence without any preprocessing. To our knowledge, no existing solution offers all these features. Our novel contributions are: a fast procedure for computing local display-adaptive tone curves which minimize contrast distortions, a fast method for detail enhancement free from ringing artifacts, and an integrated video tone-mapping solution combining all of the above features.

    This project was funded by the Swedish Foundation for Strategic Research (SSF) through grant IIS11-0081, the Linköping University Center for Industrial Information Technology (CENIIT), the Swedish Research Council through the Linnaeus Environment CADICS, and COST Action IC1005.
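    The core idea behind display-adaptive tone curves is to allocate the display's limited dynamic range to the luminance levels where most pixels live. A much-simplified global sketch (the paper's operator is local, noise-aware and temporally filtered; the bin count and display range below are illustrative choices):

```python
import numpy as np

def tone_curve(log_lum, n_bins=64, display_range=2.6):
    """Build a global, display-adaptive tone curve.

    Distributes the display dynamic range (in log10 units, e.g. 2.6
    for a ~400:1 display) across luminance bins in proportion to the
    fraction of pixels in each bin, so frequently occurring
    luminance levels retain the most contrast.
    """
    hist, edges = np.histogram(log_lum, bins=n_bins)
    p = hist / hist.sum()
    # Cumulative allocation yields a monotone piecewise-linear curve.
    out_levels = np.concatenate([[0.0], np.cumsum(p)]) * display_range
    return edges, out_levels

def apply_curve(log_lum, edges, out_levels):
    """Map scene log-luminance to display log-luminance."""
    return np.interp(log_lum, edges, out_levels)
```

    This degenerate case is essentially histogram equalization in the log domain; the contrast-distortion-minimizing curves in the paper additionally bound the slopes so that contrast is compressed gracefully rather than clipped.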

    Hybrid-MST: A hybrid active sampling strategy for pairwise preference aggregation

    In this paper we present a hybrid active sampling strategy for pairwise preference aggregation, which aims at recovering the underlying ratings of the test candidates from sparse and noisy pairwise labelling. Our method employs a Bayesian optimization framework and the Bradley-Terry model to construct the utility function and to obtain the Expected Information Gain (EIG) of each pair. For computational efficiency, Gauss-Hermite quadrature is used to estimate the EIG. We propose a hybrid active sampling strategy that uses either Global Maximum (GM) EIG sampling or Minimum Spanning Tree (MST) sampling in each trial, as determined by the test budget. The proposed method has been validated on both simulated and real-world datasets, where it shows higher preference aggregation ability than the state-of-the-art methods.
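    To make the ingredients concrete, the sketch below shows the Bradley-Terry win probability and a greedy stand-in for GM sampling that picks the pair whose outcome is most uncertain (maximum binary entropy). This entropy proxy is an illustrative simplification: the paper maximises the full EIG, estimated with Gauss-Hermite quadrature over the posterior.

```python
import numpy as np
from itertools import combinations

def bt_prob(s_i, s_j):
    """Bradley-Terry model: probability that item i is preferred
    over item j, given latent scores on a logistic (log) scale."""
    return 1.0 / (1.0 + np.exp(-(s_i - s_j)))

def pick_pair_max_entropy(scores):
    """Greedy pair selection: choose the comparison whose outcome
    is most uncertain under the current score estimates, i.e. the
    pair maximising the binary entropy of the win probability."""
    best, best_h = None, -1.0
    for i, j in combinations(range(len(scores)), 2):
        p = bt_prob(scores[i], scores[j])
        h = -(p * np.log2(p) + (1.0 - p) * np.log2(1.0 - p))
        if h > best_h:
            best, best_h = (i, j), h
    return best
```

    Pairs with nearly equal scores give p close to 0.5 and are thus the most informative to compare next, which matches the intuition behind EIG-driven sampling.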